When processing a batch of graphs in machine learning models such as graph neural networks (GNNs), several small graphs are usually combined into one overall graph to accelerate processing and reduce the overhead of padding. This is supported, for example, by the PyG library. However, the sizes of the small graphs can vary considerably in the number of nodes and edges, so the size of the combined graph can still vary substantially, especially for small batch sizes, and the cost of excessive padding and wasted compute is still incurred. This paper proposes a new approach, tuple packing, for generating batches that cause minimal overhead. The algorithm extends the recently introduced sequence packing approach to work on the 2D tuples (|nodes|, |edges|). A monotone heuristic is applied to the 2D histogram of tuple values to define a priority for packing histogram bins, with the objective of reaching the limits on both the number of nodes and the number of edges. Experiments validate the effectiveness of the algorithm on multiple datasets.
translated by Google Translate
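The packing objective above can be illustrated with a toy first-fit-decreasing heuristic over (|nodes|, |edges|) tuples. This is a minimal sketch under both size limits, not the paper's histogram-based algorithm; all names are hypothetical:

```python
from typing import List, Tuple

def pack_graphs(sizes: List[Tuple[int, int]],
                max_nodes: int,
                max_edges: int) -> List[List[int]]:
    """First-fit-decreasing packing of (|nodes|, |edges|) tuples.

    Returns a list of batches, each a list of graph indices, such that
    every batch respects both the node limit and the edge limit.
    """
    # Visit graphs from largest to smallest (a simple monotone order).
    order = sorted(range(len(sizes)), key=lambda i: sizes[i], reverse=True)
    batches: List[List[int]] = []      # graph indices per batch
    loads: List[Tuple[int, int]] = []  # running (nodes, edges) per batch
    for i in order:
        n, e = sizes[i]
        for b, (bn, be) in enumerate(loads):
            if bn + n <= max_nodes and be + e <= max_edges:
                batches[b].append(i)
                loads[b] = (bn + n, be + e)
                break
        else:
            # No existing batch fits this graph: open a new one.
            batches.append([i])
            loads.append((n, e))
    return batches
```

A real implementation would prioritize histogram bins rather than individual graphs, but the fill-toward-both-limits objective is the same.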
Objective: Classifier transfers usually come with dataset shifts. To overcome them, online strategies have to be applied. For practical applications, limitations of computational resources for the adaptation of batch learning algorithms such as the SVM have to be considered. Approach: We review and compare several strategies for online learning with SVMs. We focus on data selection strategies that limit the size of the stored training data [...] Main results: For different data shifts, different criteria are appropriate. For the synthetic data, adding all samples to the considered pool of samples often performed much worse than the other criteria. In particular, adding only misclassified samples performed astonishingly well. Here, balancing criteria were very important when the other criteria were not well chosen. For the transfer setups, the results indicate that the best strategy depends on the intensity of the drift during the transfer. Adding all samples and removing the oldest ones resulted in the best performance, whereas for smaller drifts, adding only potential new support vectors of the SVM was sufficient and reduced the processing resources. Significance: For BCIs based on EEG models, data from a calibration session, a previous recording session, or even recording sessions from one or several other subjects are used for training. This transfer of the learned model usually reduces performance and can therefore benefit from online learning, which adapts a classifier like the established SVM. We show that by using the right combination of data selection criteria, it is possible to adapt the classifier and largely improve the performance. Furthermore, in some cases it is possible to speed up processing and save computational resources by updating with a subset of special samples and keeping only a small size of the stored training data.
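The data selection criteria compared above ("add only misclassified" versus "add all and remove the oldest") can be sketched as a bounded pool update. This is an illustrative simplification with hypothetical names, not the paper's SVM-specific implementation:

```python
from collections import deque

def update_pool(pool, sample, label, predict, max_size,
                criterion="misclassified"):
    """Update a bounded training pool under a data selection criterion.

    criterion == "misclassified": add the sample only if the current
    classifier gets it wrong; criterion == "all": always add it.
    The oldest samples are dropped once the pool exceeds max_size.
    """
    if criterion == "all" or predict(sample) != label:
        pool.append((sample, label))
    while len(pool) > max_size:
        pool.popleft()  # remove the oldest sample
    return pool
```

After each update the classifier would be retrained on the pool; capping the pool size is what bounds the computational cost of that retraining.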
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Training and evaluating language models increasingly requires the construction of meta-datasets: diverse, curated data collections with clear provenance. Natural language prompting has recently highlighted the benefit of meta-dataset curation by converting existing, supervised datasets into a diversity of novel pretraining tasks, improving zero-shot generalization. While successful in general-domain text, translating these data-centric approaches to biomedical language modeling remains challenging, as labeled biomedical datasets are significantly underrepresented in popular data hubs. To address this challenge, we introduce BigBio, a community library of 126+ biomedical NLP datasets, currently covering 12 task categories and 10+ languages. BigBio facilitates reproducible meta-dataset curation via programmatic access to datasets and their metadata, and is compatible with current platforms for prompt engineering and end-to-end few/zero-shot language model evaluation. We discuss our process for task schema harmonization, data auditing, and contribution guidelines, and outline two illustrative use cases: zero-shot evaluation of biomedical prompts and large-scale, multi-task learning. BigBio is an ongoing community effort and is available at https://github.com/bigscience-workshop/biomedical.
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
Gated cameras hold promise as an alternative to scanning LiDAR sensors, providing high-resolution 3D depth that is robust in fog, snow, and rain. Instead of sequentially scanning a scene and directly recording depth via the photon time of flight, as in pulsed LiDAR sensors, gated imagers encode depth in the relative intensities of a handful of gated slices captured at megapixel resolution. Although existing methods have shown that high-resolution depth can be decoded from these measurements, they require a synchronized and calibrated LiDAR to supervise the gated depth decoder, which prohibits fast adoption across geographies, training on large unpaired datasets, and exploring alternative applications outside of automotive use cases. In this work, we fill this gap and propose an entirely self-supervised depth estimation method that uses gated intensity profiles and temporal consistency as a training signal. The proposed model is trained end-to-end from gated video sequences, does not require LiDAR or RGB data, and learns to estimate absolute depth values. We take gated slices as input and disentangle the estimated scene albedo, depth, and ambient light, which are then used to learn to reconstruct the input slices through a cyclic loss. We rely on temporal consistency between a given frame and neighboring gated slices to estimate depth in regions with shadows and reflections. We experimentally validate that the proposed method outperforms existing supervised and self-supervised depth estimation methods based on monocular RGB and stereo images, as well as supervised methods based on gated images.
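The cyclic reconstruction idea, re-rendering each gated slice from the estimated albedo, depth, and ambient light and penalizing the difference from the input, can be sketched as follows. This is a toy numpy illustration assuming hypothetical range-intensity profiles `C_k`, not the paper's network or exact image formation model:

```python
import numpy as np

def reconstruct_slices(albedo, depth, ambient, profiles):
    """Reconstruct gated slices from disentangled scene components.

    profiles: list of callables C_k(depth) giving each gate's
    range-intensity profile (hypothetical placeholders here).
    Each slice is modeled as albedo * C_k(depth) + ambient.
    """
    return [albedo * C(depth) + ambient for C in profiles]

def cyclic_loss(slices, albedo, depth, ambient, profiles):
    """Mean absolute error between input and reconstructed slices."""
    recon = reconstruct_slices(albedo, depth, ambient, profiles)
    return float(np.mean([np.abs(s - r).mean()
                          for s, r in zip(slices, recon)]))
```

In the actual method, albedo, depth, and ambient light are predicted by networks from the input slices, and this reconstruction error (together with temporal consistency) is the only training signal.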
Flatness of the loss curve is conjectured to be connected to the generalization ability of machine learning models, in particular neural networks. While it has been empirically observed that flatness measures consistently correlate strongly with generalization, it is still an open theoretical problem why and under which circumstances flatness is connected to generalization, in particular in light of reparameterizations that change certain flatness measures but leave generalization unchanged. We investigate the connection between flatness and generalization by relating it to the interpolation from representative data, deriving notions of representativeness and feature robustness. These notions allow us to rigorously connect flatness and generalization and to identify conditions under which the connection holds. Moreover, they give rise to a novel, but natural, relative flatness measure that correlates strongly with generalization, simplifies to ridge regression for ordinary least squares, and resolves the reparameterization issue.
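Since the relative flatness measure above is stated to simplify to ridge regression for ordinary least squares, the closed-form ridge solution it connects to can be written out. This is the standard textbook formula, shown only to make the connection concrete:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y.

    lam = 0 recovers the ordinary least squares solution; lam > 0 adds
    the quadratic penalty that the flatness regularizer reduces to.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
```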
Machine learning (ML) has become a core component of many real-world applications, and training data is a key factor that drives current progress. This huge success has led Internet companies to deploy machine learning as a service (MLaaS). Recently, the first membership inference attack showed that extraction of information on the training set is possible in such MLaaS settings, which has severe security and privacy implications. However, the early demonstrations of the feasibility of such attacks made many assumptions about the adversary, such as using multiple so-called shadow models, knowledge of the target model structure, and having a dataset from the same distribution as the target model's training data. We relax all these key assumptions, thereby showing that such attacks are very broadly applicable at low cost and consequently pose a more severe risk than previously thought. We present the most comprehensive study so far on this emerging threat, using eight diverse datasets that show the viability of the proposed attacks across domains. In addition, we propose the first effective defense mechanisms against such a broader class of membership inference attacks that maintain a high level of utility of the ML model.
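One of the simplest relaxed attacks of this kind is a confidence-threshold test: training-set members tend to receive higher-confidence predictions than non-members, so no shadow models are needed. A minimal sketch with a hypothetical threshold, not necessarily the paper's exact attack:

```python
from typing import List, Sequence

def infer_membership(posteriors: Sequence[Sequence[float]],
                     threshold: float = 0.9) -> List[bool]:
    """Predict 'member' when the model's top class confidence exceeds
    the threshold. Members are typically predicted with higher
    confidence than non-members, so this weak signal alone suffices
    for a baseline attack.
    """
    return [max(p) >= threshold for p in posteriors]
```

In practice the threshold would be calibrated on data the adversary controls; defenses typically work by flattening the posterior gap between members and non-members.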
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
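The NAIVEATTACK idea of adding triggers to the raw data before distillation can be illustrated by stamping a fixed pixel patch into each image. This is a toy sketch with a hypothetical trigger shape and placement, not the paper's exact trigger (and nothing like the learned, iteratively updated triggers of DOORPING):

```python
import numpy as np

def add_trigger(images: np.ndarray, size: int = 3,
                value: float = 1.0) -> np.ndarray:
    """Stamp a size x size patch of `value` into the bottom-right
    corner of each image in a (N, H, W) array -- a minimal
    pixel-pattern backdoor trigger. The input array is left unchanged.
    """
    imgs = images.copy()
    imgs[:, -size:, -size:] = value
    return imgs
```

A backdoor attacker would pair the triggered images with a chosen target label before running the distillation procedure.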
We present a dynamic path planning algorithm to navigate an amphibious rotor craft through a concave time-invariant obstacle field while attempting to minimize energy usage. We create a nonlinear quaternion state model that represents the rotor craft dynamics above and below the water. The 6-degree-of-freedom dynamics are used within a layered architecture to generate motion paths for the vehicle to follow, together with the required control inputs. The rotor craft has a 3-dimensional map of its surroundings that is updated via limited-range onboard sensor readings within the current medium (air or water). Path planning is done via PRM and D* Lite.
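The PRM step above can be sketched in 2D: sample collision-free points, connect nearby pairs whose connecting segment is free, and search the resulting roadmap. This is a minimal illustration with hypothetical parameters, omitting the 3D map, the vehicle dynamics, and the D* Lite replanning:

```python
import heapq
import math
import random

def prm_path(start, goal, obstacles, n_samples=200, radius=0.3, seed=0):
    """Minimal 2D PRM in the unit square: sample free points, link
    pairs within `radius` whose straight segment avoids the circular
    obstacles, then run Dijkstra from start (node 0) to goal (node 1).
    obstacles: list of (cx, cy, r). Returns waypoints or None.
    """
    rng = random.Random(seed)

    def free(p):
        return all(math.dist(p, (cx, cy)) > r for cx, cy, r in obstacles)

    def segment_free(a, b, steps=20):
        return all(free(((1 - t) * a[0] + t * b[0],
                         (1 - t) * a[1] + t * b[1]))
                   for t in (i / steps for i in range(steps + 1)))

    nodes = [start, goal]
    while len(nodes) < n_samples + 2:
        p = (rng.random(), rng.random())
        if free(p):
            nodes.append(p)

    # Build the roadmap: connect close, mutually visible samples.
    edges = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            d = math.dist(nodes[i], nodes[j])
            if d <= radius and segment_free(nodes[i], nodes[j]):
                edges[i].append((j, d))
                edges[j].append((i, d))

    # Dijkstra over the roadmap.
    dist, prev, heap = {0: 0.0}, {}, [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == 1:  # goal reached: walk back through predecessors
            path, v = [], 1
            while v != 0:
                path.append(nodes[v])
                v = prev[v]
            return [nodes[0]] + path[::-1]
        if d > dist.get(u, math.inf):
            continue
        for v, w in edges[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return None
```

The paper's planner works on a sensed 3D map in the current medium and feeds the resulting waypoints to the layered dynamics controller; D* Lite then repairs the path as the map is updated.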